Web Survey Bibliography
Sensitive behavioral questions in surveys often result in self-reports that are distorted by social desirability bias: interviewees underreport socially undesirable behavior and overreport socially desirable activities. Such systematic measurement error in turn generates erroneous prevalence estimates of the behavior in question. The standard survey methodology literature often recommends (1) positive loading of sensitive questions, e.g. using forgiving wording, or (2) choosing a permissive question context, to encourage interviewees to answer more honestly. However, only a few attempts to systematically validate these recommendations can be found in the experimental literature. Based on theories of cognitive dissonance (Festinger 1957; Aronson 1999) and the inclusion/exclusion model (Schwarz & Bless 1992, 2007), we derive explanations of how manipulations of question wording and context could elicit more socially undesirable answers to sensitive survey questions. In an experimental online survey (N=1176, 4 splits), we evaluate the effects of (1) forgiving wording and (2) question context (permissive versus restrictive) on social desirability bias in different sensitive behavioral questions. Consistent with previous experimental findings (Catania et al. 1996; Holtgraves et al. 1997; Presser 1990; Tourangeau & Smith 1996), the empirical evidence for the predicted effects is mixed. Thus, the assumed bias-reducing effects of forgiving wording and of a permissive question context should not be taken for granted.
Conference homepage (abstract)